
    Tomographic approach to resolving the distribution of LISA Galactic binaries

    The space-based gravitational wave detector LISA is expected to observe a large population of Galactic white dwarf binaries whose collective signal is likely to dominate instrumental noise at observational frequencies in the range 10^{-4} to 10^{-3} Hz. The motion of LISA modulates the signal of each binary in both frequency and amplitude, the exact modulation depending on the source direction and frequency. Starting with the observed response of one LISA interferometer and assuming only Doppler modulation due to the orbital motion of LISA, we show how the distribution of the entire binary population in frequency and sky position can be reconstructed using a tomographic approach. The method is linear, and the reconstruction of a delta function distribution, corresponding to an isolated binary, yields a point spread function (PSF). An arbitrary distribution and its reconstruction are related via smoothing with this PSF. Exploratory results are reported demonstrating the recovery of binary sources in the presence of white Gaussian noise.
    Comment: 13 pages and 9 figures; high-resolution figures can be obtained from http://www.phys.utb.edu/~rajesh/lisa_tomography.pd
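
    The linearity property described here lends itself to a compact illustration. Below is a minimal sketch (Python with numpy/scipy; the Gaussian PSF is our stand-in, not the actual LISA tomographic response) of how the recovered map relates to the true binary distribution through smoothing with the PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy "sky": a (frequency, sky-position) grid holding two isolated binaries.
sky = np.zeros((128, 128))
sky[40, 60] = 1.0          # an isolated binary reconstructs to one PSF copy
sky[80, 90] = 0.5

# Hypothetical PSF: a narrow Gaussian standing in for the reconstruction of a
# delta-function distribution; the real PSF depends on LISA's orbit.
yy, xx = np.mgrid[-16:17, -16:17]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()

# Because the method is linear, the recovered distribution is the true
# distribution convolved (smoothed) with the PSF.
recovered = fftconvolve(sky, psf, mode="same")
print("recovered peak near the first binary:", recovered[38:43, 58:63].max())
```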

    The application of compressive sampling to radio astronomy I: Deconvolution

    Compressive sampling is a new paradigm for sampling, based on the sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources, using the isotropic undecimated wavelet transform as a basis for the reconstruction step. We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
    Comment: Published by A&A. Matlab code can be found at http://code.google.com/p/csra/download
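
    For readers unfamiliar with CS-style deconvolution, the sketch below shows the core idea with an iterative soft-thresholding (ISTA) loop. It uses image-domain sparsity, which suits point sources; the paper's wavelet basis for extended emission is not reproduced, and the beam, noise level and penalty here are invented toy values.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
n = 64
sky = np.zeros((n, n)); sky[20, 30] = 1.0; sky[45, 12] = 0.7

# Hypothetical unit-sum Gaussian dirty beam (real dirty beams have sidelobes).
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
dirty = fftconvolve(sky, psf, mode="same") + 1e-3 * rng.standard_normal((n, n))

step = 1.0   # 1/Lipschitz constant; max |FFT(psf)|^2 = 1 for a unit-sum beam
lam = 1e-3   # sparsity penalty (tunable)
model = np.zeros_like(dirty)
for _ in range(300):                                        # ISTA iterations
    resid = dirty - fftconvolve(model, psf, mode="same")
    z = model + step * fftconvolve(resid, psf[::-1, ::-1], mode="same")
    model = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft-threshold
print("recovered amplitudes:", model[20, 30], model[45, 12])
```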

    Subtraction of Bright Point Sources from Synthesis Images of the Epoch of Reionization

    Bright point sources associated with extragalactic AGN and radio galaxies are an important foreground for low-frequency radio experiments aimed at detecting the redshifted 21cm emission from neutral hydrogen during the epoch of reionization. The frequency dependence of the synthesized beam implies that the sidelobes of these sources will move across the field of view as a function of observing frequency, hence frustrating line-of-sight foreground subtraction techniques. We describe a method for subtracting these point sources from dirty maps produced by an instrument such as the MWA. This technique combines matched filters with an iterative centroiding scheme to locate and characterize point sources in the presence of a diffuse background. Simulations show that this technique can improve the dynamic range of EoR maps by 2-3 orders of magnitude.
    Comment: 11 pages, 8 figures, 1 table, submitted to PASA
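
    A hedged sketch of the matched-filter-plus-centroiding idea follows: correlate the map with the (assumed known) beam, take the brightest peak, refine its position with an intensity-weighted centroid, subtract the fitted source, and repeat. The Gaussian beam, window size and source list are illustrative only, not the MWA's.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
n = 128
yy, xx = np.mgrid[-8:9, -8:9]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))       # peak-normalised toy beam

sky = np.zeros((n, n)); sky[50, 70] = 10.0; sky[90, 30] = 6.0
dirty = fftconvolve(sky, psf, mode="same") + 0.1 * rng.standard_normal((n, n))

work = dirty.copy()
found = []
for _ in range(2):                                       # find/subtract 2 sources
    mf = fftconvolve(work, psf[::-1, ::-1], mode="same")     # matched filter
    j, i = np.unravel_index(np.argmax(mf), mf.shape)
    for _ in range(3):                    # iterative intensity-weighted centroid
        win = work[j-3:j+4, i-3:i+4]
        wy, wx = np.mgrid[-3:4, -3:4]
        j += int(round((win * wy).sum() / win.sum()))
        i += int(round((win * wx).sum() / win.sum()))
    amp = work[j, i] / psf.max()                         # peak flux estimate
    found.append((j, i, amp))
    comp = np.zeros_like(work); comp[j, i] = amp
    work -= fftconvolve(comp, psf, mode="same")          # subtract the source
print(found)
```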

    LISA Data Analysis using MCMC methods

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge very different from the one encountered in ground-based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space, with a dimension upwards of 50,000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analyses and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we super-cool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
    Comment: 14 pages, 7 figures
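
    The heat-then-cool strategy described above can be shown in miniature. Below is a minimal Metropolis-Hastings sketch with a simulated-annealing temperature schedule on a toy two-parameter bimodal likelihood; the real analysis evaluates an F-statistic over tens of thousands of parameters, which is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_like(theta):
    # Toy bimodal surface standing in for the overlapping-signal problem.
    a = -0.5 * np.sum((theta - np.array([2.0, 1.0]))**2) / 0.1
    b = -0.5 * np.sum((theta + np.array([1.0, 2.0]))**2) / 0.1
    return np.logaddexp(a, b)

theta = rng.standard_normal(2)
n_steps = 20000
chain = np.empty((n_steps, 2))
for k in range(n_steps):
    # Annealing: hot (T=10) early to explore, cooling to T=1 for sampling;
    # "super-cooling" (T<1) at the end would sharpen the maximum-likelihood peak.
    T = max(1.0, 10.0 * (1.0 - k / (0.5 * n_steps)))
    prop = theta + 0.3 * rng.standard_normal(2)
    if np.log(rng.uniform()) < (log_like(prop) - log_like(theta)) / T:
        theta = prop
    chain[k] = theta

print("mean over the cold phase:", chain[n_steps // 2:].mean(axis=0))
```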

    Observations of M87 and Hydra A at 90 GHz

    This paper presents new observations of the AGNs M87 and Hydra A at 90 GHz, made with the MUSTANG bolometer array on the Green Bank Telescope at 8.5" resolution. A spectral analysis is performed combining these new data with archival VLA data on these objects at longer wavelengths. This analysis can detect the variations in spectral index and curvature expected from energy losses in the radiating particles. M87 shows only weak evidence for steepening of the spectrum along the jet, suggesting either re-acceleration of the relativistic particles in the jet or insufficient losses to affect the spectrum at 90 GHz. The jets in Hydra A show strong steepening as they move away from the nucleus, suggesting unbalanced losses of the higher-energy relativistic particles. The difference between these two sources may be accounted for by the different lengths over which the jets are observable: 2 kpc for M87 and 45 kpc for Hydra A.
    Comment: 11 pages, submitted to ApJ
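
    The spectral-index measurement underlying this analysis is a one-line formula, alpha = log(S1/S2) / log(nu1/nu2) for S proportional to nu^alpha. The sketch below works it through with made-up placeholder flux densities, not values from the paper.

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha assuming S proportional to nu**alpha."""
    return np.log(s1 / s2) / np.log(nu1 / nu2)

# Hypothetical flux densities for one jet knot (Jy) at 90 GHz and 8.4 GHz.
alpha = spectral_index(s1=0.5, nu1=90e9, s2=2.0, nu2=8.4e9)
print(f"alpha = {alpha:.2f}")  # a more negative alpha means a steeper spectrum
```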

    Multi-Frequency Synthesis of VLBI Images Using a Generalized Maximum Entropy Method

    A new multi-frequency synthesis algorithm for reconstructing images from multi-frequency VLBI data is proposed. The algorithm is based on a generalized maximum-entropy method, and makes it possible to derive an effective spectral correction for images over a broad frequency bandwidth while simultaneously reconstructing the spectral-index distribution over the source. The results of numerical simulations demonstrating the capabilities of the algorithm are presented.
    Comment: 17 pages, 8 figures
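
    As background, a plain (single-frequency) maximum-entropy reconstruction maximises J(I) = -chi^2/2 + lambda * H(I) with entropy H(I) = -sum I ln(I/M) relative to a default image M. The sketch below is a schematic gradient-ascent version of that; the generalized multi-frequency and spectral-index terms of the paper are omitted, and all numerical values are ours.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
n = 64
yy, xx = np.mgrid[-8:9, -8:9]
beam = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); beam /= beam.sum()

truth = np.zeros((n, n)); truth[28:36, 28:36] = 1.0        # extended source
data = fftconvolve(truth, beam, mode="same") + 5e-3 * rng.standard_normal((n, n))

prior = np.full((n, n), 0.01)     # default image M in the entropy term
img = prior.copy()
lam, step = 0.01, 0.5
for _ in range(500):
    resid = data - fftconvolve(img, beam, mode="same")
    grad_data = fftconvolve(resid, beam[::-1, ::-1], mode="same")  # d(-chi2/2)/dI
    grad_H = -(np.log(img / prior) + 1.0)                          # entropy gradient
    img = np.clip(img + step * (grad_data + lam * grad_H), 1e-8, None)

print("flux recovered in the source box:", img[28:36, 28:36].sum())
```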

    Long-Term Impact of Liming on Soil C and N in a Fertile Spruce Forest Ecosystem

    Liming can counteract acidification in forest soils, but the effects on soil C and N pools and fluxes over long periods are less well understood. Replicated plots in an acidic and N-rich 40-year-old Norway spruce (Picea abies) forest in SW Sweden (Hasslov) were treated with 0, 3.45 and 8.75 Mg ha^-1 of dolomitic lime (D0, D2 and D3) in 1984. Between 1984 and 2016, soil organic C to 30 cm depth increased by 28 Mg ha^-1 (a 30% increase) in D0 and decreased by 9 Mg ha^-1 (a 9.4% decrease) in D3. The change in D2 was not significant (+2 Mg ha^-1). Soil N pools changed proportionally to the soil C pools. The C and N changes occurred almost exclusively in the top organic layer. Non-burrowing earthworms responded positively to liming and stimulated heterotrophic respiration in this layer in both D2 and D3. Burrowing earthworms in D3 further accelerated C and N turnover and loss of soil. The high soil C and N loss at our relatively N-rich site differs from studies of N-poor sites showing no C and N loss. Earthworms need both high pH and N-rich food to reach high abundance and biomass. This can explain why liming of N-rich soils often results in decreasing C and N pools, whereas liming of N-poor soils with few earthworms will not show any change in soil C and N. Extractable nitrate N was always higher in D3 than in D2 and D0. After 6 years (1990), potential nitrification was much higher in D3 (197 kg N ha^-1) than in D0 (36 kg N ha^-1), but this difference decreased during the following years, when the unlimed organic layers also showed high nitrification potential. Our experiment finds that high-dose liming of acidic, N-rich forest soils produces an initial pulse of soil heterotrophic respiration and an increase in earthworm biomass, which together cause long-term declines in soil C and N pools.
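
    A small arithmetic cross-check on the reported pool changes: the stated absolute and relative changes together imply the initial soil organic C stock, and the two treatments give consistent baselines. The inputs are from the abstract; the inferred 1984 stocks are our calculation.

```python
# Absolute changes reported for 1984-2016 (Mg C ha^-1) and their stated
# relative sizes; baseline = |delta| / |relative change|.
delta_d0, rel_d0 = 28.0, 0.30     # unlimed plots: +28 Mg ha^-1, +30%
delta_d3, rel_d3 = -9.0, 0.094    # high-dose plots: -9 Mg ha^-1, -9.4%

baseline_d0 = delta_d0 / rel_d0           # ~93 Mg C ha^-1 in 1984
baseline_d3 = abs(delta_d3) / rel_d3      # ~96 Mg C ha^-1 in 1984
print(f"implied 1984 C stocks: D0 ~{baseline_d0:.0f}, D3 ~{baseline_d3:.0f} Mg ha^-1")
```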

    Analysing Astronomy Algorithms for GPUs and Beyond

    Astronomy depends on ever-increasing computing power. Processor clock rates have plateaued, and increased performance now appears in the form of additional processor cores on a single chip. This poses significant challenges to the astronomy software community. Graphics Processing Units (GPUs), now capable of general-purpose computation, exemplify both the difficult learning curve and the significant speedups exhibited by massively-parallel hardware architectures. We present a generalised approach to tackling this paradigm shift, based on the analysis of algorithms. We describe a small collection of foundation algorithms relevant to astronomy and explain how they may be used to ease the transition to massively-parallel computing architectures. We demonstrate the effectiveness of our approach by applying it to four well-known astronomy problems: Högbom CLEAN, inverse ray-shooting for gravitational lensing, pulsar dedispersion and volume rendering. Algorithms with well-defined memory access patterns and high arithmetic intensity stand to receive the greatest performance boost from massively-parallel architectures, while those that involve a significant amount of decision-making may struggle to take advantage of the available processing power.
    Comment: 10 pages, 3 figures, accepted for publication in MNRAS
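
    The "arithmetic intensity" criterion mentioned above is simply floating-point operations per byte moved; kernels well above a device's flops/bandwidth ratio are compute-bound and gain most from GPUs. The back-of-envelope sketch below uses rough operation counts of our own, not figures measured in the paper.

```python
def arithmetic_intensity(flops, bytes_moved):
    """Flops per byte of memory traffic for a kernel."""
    return flops / bytes_moved

# Example: a naive N-sample dedispersion sum, ~1 flop per 4-byte sample read.
n = 1_000_000
dedisp = arithmetic_intensity(flops=n, bytes_moved=4 * n)

# Example: an M x M ray-shooting-style accumulation that reuses loaded data.
m = 1_000
rayshoot = arithmetic_intensity(flops=2 * m * m, bytes_moved=4 * 2 * m)

print(f"dedispersion ~{dedisp:.2f} flop/byte (memory-bound)")
print(f"ray shooting ~{rayshoot:.0f} flop/byte (compute-bound)")
```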

    On the reliability of polarization estimation using Rotation Measure Synthesis

    We benchmark the reliability of the Rotation Measure (RM) synthesis algorithm using the 1005 Centaurus A field sources of Feain et al. (2009). The RM synthesis solutions are compared with estimates of the polarization parameters obtained using traditional methods. This analysis provides verification of the reliability of RM synthesis estimates. We show that estimates of the polarization parameters can be made at lower S/N if the range of RMs is bounded, but reliable estimates for individual sources with unusual RMs require unconstrained solutions and higher S/N. We derive from first principles the statistical properties of the polarization amplitude associated with RM synthesis in the presence of noise. The amplitude distribution depends explicitly on the amplitude of the underlying (intrinsic) polarization signal, hence it is necessary to model the underlying polarization signal distribution in order to estimate the reliability of, and errors in, polarization parameter estimates. We introduce a Bayesian method to derive the distribution of intrinsic amplitudes from the distribution of measured amplitudes. The theoretically derived distribution is compared with the empirical data to provide quantitative estimates of the probability that any given RM synthesis solution is correct, as a function of S/N, the measured polarized amplitude, and the intrinsic polarization amplitude relative to the noise.
    Comment: accepted for publication in the Astrophysical Journal
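
    For context, RM synthesis approximates the Faraday dispersion function by a discrete sum over channels in lambda^2 space, F(phi) ~ (1/N) * sum_j P_j exp(-2i phi (lambda_j^2 - lambda_0^2)), and reads the RM off the peak of |F|. The sketch below demonstrates this on a simulated source; the band, channelisation and noise level are illustrative, not those of the Centaurus A data.

```python
import numpy as np

c = 2.998e8
freqs = np.linspace(1.3e9, 1.5e9, 256)          # hypothetical band (Hz)
lam2 = (c / freqs)**2
lam2_0 = lam2.mean()

# Simulated source: unit polarized amplitude at RM = +40 rad m^-2, plus noise.
rng = np.random.default_rng(4)
rm_true = 40.0
P = np.exp(2j * rm_true * lam2) + 0.1 * (rng.standard_normal(256)
                                         + 1j * rng.standard_normal(256))

phi = np.linspace(-500, 500, 2001)              # trial Faraday depths
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])
print("peak at phi =", phi[np.argmax(np.abs(F))], "rad m^-2")
```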

    The remnant of SN1987A revealed at (sub-)mm wavelengths

    Context: Supernova 1987A (SN1987A) exploded in the Large Magellanic Cloud (LMC). Its proximity and rapid evolution make it a unique case study of the early phases in the development of a supernova remnant. One particular aspect of interest is the possible formation of dust in SN1987A, as SNe could contribute significantly to the dust seen at high redshifts. Aims: We explore the properties of SN1987A and its circumstellar medium as seen at mm and sub-mm wavelengths, bridging the gap between extant radio and infrared (IR) observations of, respectively, the synchrotron and dust emission. Methods: SN1987A was observed with the Australia Telescope Compact Array (ATCA) at 3.2 mm in July 2005, and with the Atacama Pathfinder EXperiment (APEX) at 0.87 mm in May 2007. We present the images and brightness measurements of SN1987A at these wavelengths for the first time. Results: SN1987A is detected as an unresolved point source of 11.2 +/- 2.0 mJy at 3.2 mm (5" beam) and 21 +/- 4 mJy at 0.87 mm (18" beam). These flux densities are in perfect agreement with extrapolations of the power-law radio spectrum and the modified-blackbody dust emission, respectively. This places limits on the presence of free-free emission, at a level similar to that expected from the ionized ejecta of SN1987A. Adjacent, fainter emission is observed at 0.87 mm, extending ~0.5' towards the south-west. This could be the impact of the supernova progenitor's wind, from when it was still a red supergiant, upon a dense medium. Conclusions: We have established a continuous spectral energy distribution for the emission from SN1987A and its immediate surroundings, linking the IR and radio data. This places limits on the contribution from ionized plasma. Our sub-mm image reveals complexity in the distribution of cold dust surrounding SN1987A, but leaves room for freshly synthesized dust in the SN ejecta.
    Comment: Accepted for publication in Astronomy and Astrophysics Letters on 28 April 2011. A better quality figure 1 can be obtained from http://www.astro.keele.ac.uk/~jacco/research/SN1987A087mm.ep
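
    The two-component spectral model implied above, a synchrotron power law extrapolated from radio frequencies plus a modified blackbody for cold dust, is easy to sketch. All normalisations, the spectral indices alpha and beta, and the dust temperature below are placeholder values of ours, not the fitted parameters of the paper.

```python
import numpy as np

h, k, c = 6.626e-34, 1.381e-23, 2.998e8

def synchrotron(nu, s_ref=30e-3, nu_ref=9e9, alpha=-0.75):
    """Power-law flux density in Jy, S ~ nu**alpha, anchored at nu_ref."""
    return s_ref * (nu / nu_ref)**alpha

def modified_blackbody(nu, norm=1e-3, beta=1.5, T=20.0, nu0=3.45e11):
    """Optically thin dust: S ~ nu**beta * B_nu(T), normalised to norm Jy at nu0."""
    bnu = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))
    bnu0 = (2 * h * nu0**3 / c**2) / np.expm1(h * nu0 / (k * T))
    return norm * (nu / nu0)**beta * bnu / bnu0

for wl_mm in (3.2, 0.87):                 # the two wavelengths observed above
    nu = c / (wl_mm * 1e-3)
    total = synchrotron(nu) + modified_blackbody(nu)
    print(f"{wl_mm} mm: {1e3 * total:.1f} mJy (synchrotron + dust)")
```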